Section: New Results

Robust state estimation (Sensor fusion)

This research is the follow-up of Agostino Martinelli's investigations carried out over the last four years on the visual-inertial sensor fusion problem and the unknown input observability problem.

Visual-inertial structure from motion

Participants : Agostino Martinelli, Alexander Oliva, Alessandro Renzaglia.

During this year we achieved the following two objectives:

  1. (Theoretical) Extension of the closed-form solution introduced in [64] to the cooperative case.

  2. (Experimental) Improvement by one order of magnitude of the precision of the absolute scale determined by our closed-form solution introduced in [64].

Regarding the first objective, we obtained a new fundamental theoretical result in the framework of cooperative visual-inertial sensor fusion. Specifically, the case of two agents was investigated. First, the entire observable state was analytically derived. This state includes the relative position between the two aerial vehicles (which includes the absolute scale), the relative velocity, and the three Euler angles that express the rotation between the two vehicle frames. Then, the basic equations that describe this system were analytically obtained. These results were presented at the first International Symposium on Multi-Robot and Multi-Agent Systems [69]. Finally, we extended the closed-form solution introduced in [64] to the cooperative case. Specifically, the observable state was expressed in closed form in terms of the measurements provided by the monocular vision and inertial sensors over a short time interval. We believe that this is a fundamental theoretical result: on the one hand, it allows us to automatically retrieve the absolute scale in closed form (and consequently without prior knowledge), even without observing external point features in the environment; on the other hand, it allows a theoretical investigation to detect all the system singularities. Extensive simulations and real experiments clearly show that the proposed solution is successful.
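In an illustrative notation (a sketch only; the choice of expressing the relative quantities in the frame of the first vehicle is an assumption of this sketch, not taken from the report), the observable state described above can be written as:

```latex
% Illustrative notation: relative quantities expressed in the frame of vehicle 1
S \;=\; \Big[\; {}^{1}\mathbf{p}_{12},\;\; {}^{1}\mathbf{v}_{12},\;\; \phi,\;\; \theta,\;\; \psi \;\Big]
```

where ${}^{1}\mathbf{p}_{12}$ and ${}^{1}\mathbf{v}_{12}$ denote the relative position (which carries the absolute scale) and the relative velocity of the second vehicle with respect to the first, and $(\phi,\theta,\psi)$ are the roll, pitch and yaw angles of the rotation between the two vehicle frames.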

Regarding the second objective, our former experimental implementation provided a precision on the absolute scale in the range of 10-20% (all the details about the experimental setup are available in [55]). These former results were obtained in collaboration with the Robotics and Perception Group at the University of Zurich, in the framework of the ANR-VIMAD project; the experiments were carried out in Zurich. This year, by extensive use of the Kinovis platform available at Inria, we investigated the impact of several sources of systematic error (imperfect extrinsic camera calibration, time delay and imperfect time alignment between sensors, etc.). We used simple methods to remove these error sources and achieved a precision on the absolute scale in the range of 1-5%.
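Among the error sources listed above is the imperfect time alignment between the camera and the IMU. As a hedged illustration of one simple way such an offset can be handled (not necessarily the method used in these experiments; all signal names are hypothetical), a constant camera/IMU time offset can be estimated by cross-correlating the angular-rate magnitudes of the two streams:

```python
# Hedged sketch: estimate a constant camera/IMU time offset by cross-correlating
# the angular-rate magnitudes of the two streams (not necessarily the method used
# in the experiments described above; signal names are hypothetical).
import numpy as np

def estimate_time_offset(gyro_rate, cam_rate, dt):
    """Return the offset (seconds) by which gyro_rate lags cam_rate (negative if it leads).

    gyro_rate : 1-D array, |angular rate| from the IMU, sampled every dt seconds
    cam_rate  : 1-D array, |angular rate| derived from camera poses, same sampling
    """
    a = gyro_rate - np.mean(gyro_rate)
    b = cam_rate - np.mean(cam_rate)
    xcorr = np.correlate(a, b, mode="full")   # cross-correlation over all lags
    lag = np.argmax(xcorr) - (len(b) - 1)     # lag (in samples) of the best alignment
    return lag * dt

# Hypothetical usage, after resampling both streams to a common rate:
# offset = estimate_time_offset(np.linalg.norm(imu_gyro, axis=1),
#                               np.linalg.norm(cam_gyro, axis=1), dt=0.005)
```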

Unknown Input Observability

Participant : Agostino Martinelli.

During this year I achieved the following two objectives:

  1. (Theoretical) Extension of the analytic solution presented in [66] to the driftless case with multiple unknown inputs.

  2. (Theoretical) Application of the solution in [66] to several problems in computer vision, neuroscience and robotics.

Regarding the first objective, I obtained the general analytic solution of the nonlinear unknown input observability problem. As with the observability rank condition, the analytic criterion in the presence of unknown inputs is based on the computation of the observable codistribution by a recursive and convergent algorithm. The algorithm is unexpectedly simple and can be easily and automatically applied to nonlinear systems driven by both known and unknown inputs, independently of their complexity and type of nonlinearity. Surprisingly, the complexity of the overall analytic criterion is comparable to that of the standard method for checking state observability in the absence of unknown inputs (i.e., the observability rank condition). Given any nonlinear system, characterized by any type of nonlinearity and driven by both known and unknown inputs, state observability is obtained automatically, i.e., by following a systematic procedure (e.g., with a very simple code that uses symbolic computation). This is a fundamental practical (and unexpected) advantage. On the other hand, the analytic derivations and all the proofs needed to derive the algorithm, to establish its convergence properties and to prove their general validity are very complex, and they rely extensively on an ingenious analogy with the theory of General Relativity. In practice, these derivations make extensive use of Ricci calculus with tensors (in particular, I adopt the Einstein notation for brevity). All the results are fully described in a book, which is expected to be published next year. A first draft of the book is now available on arXiv (arXiv:1704.03252).
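The paragraph above compares the criterion to the standard observability rank condition and notes that it can be applied automatically via symbolic computation. As a hedged illustration of that standard case only (not the unknown-input algorithm of the book, and for a hypothetical example system), the rank condition can be checked with a few lines of SymPy:

```python
# Minimal sketch: the standard observability rank condition via symbolic computation,
# for a hypothetical unicycle with position measurements (known inputs only).
#   state x = (px, py, theta), inputs (v, w): dx/dt = f1(x)*v + f2(x)*w, outputs h = (px, py)
import sympy as sp

px, py, theta = sp.symbols('px py theta', real=True)
x = sp.Matrix([px, py, theta])

f1 = sp.Matrix([sp.cos(theta), sp.sin(theta), 0])   # forward-speed vector field
f2 = sp.Matrix([0, 0, 1])                           # angular-speed vector field
outputs = [px, py]                                  # GPS-like position outputs

def lie_derivative(g, f):
    """Lie derivative of the scalar function g along the vector field f."""
    return (sp.Matrix([g]).jacobian(x) * f)[0]

# Generate Lie derivatives of the outputs along the vector fields up to order 2
# and stack their gradients into the observable codistribution.
funcs, frontier = list(outputs), list(outputs)
for _ in range(2):
    frontier = [lie_derivative(g, f) for g in frontier for f in (f1, f2)]
    funcs += frontier

codistribution = sp.Matrix(funcs).jacobian(x)
print("rank:", codistribution.rank(), "/ state dimension:", len(x))
# Here the rank equals 3, so this example satisfies the rank condition
# (weak local observability of the full state).
```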

Regarding the second objective, the solution that holds in the driftless case with a single unknown input [66] has been used to investigate several problems, including:

  1. The unicycle in the presence of a single disturbance, presented at the SIAM Conference on Control and Its Applications 2017 [68] (a hedged model sketch follows this list).

  2. A vehicle moving in 3D in the presence of a disturbance, presented at IROS 2017 [67].
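As an illustration of the kind of system considered in the first item (a sketch only; the exact model in [68], and in particular the channel on which the disturbance acts, may differ), a unicycle whose heading rate is corrupted by an unknown input $w$ can be written as:

```latex
% Illustrative unicycle with known inputs (v, omega) and one unknown input w,
% assumed here, for illustration, to act on the heading rate
\dot{x} = v \cos\theta, \qquad
\dot{y} = v \sin\theta, \qquad
\dot{\theta} = \omega + w
```

where $(x, y, \theta)$ is the vehicle configuration and $(v, \omega)$ are the known inputs; the question addressed by the criterion above is which functions of the state remain observable despite the unknown input $w$.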

Finally, the visual and inertial sensor fusion problem, when some of the inputs are unknown, has been investigated both in 2D and in 3D. All the results are described in Chapter 5 of the book available on arXiv (arXiv:1704.03252).